Feature Selection via GANs (GANFS): Enhancing Machine Learning Models for DDoS Mitigation
Distributed Denial of Service (DDoS) attacks represent a persistent and evolving threat to modern networked systems, capable of causing large-scale service disruptions. The complexity of such attacks, often hidden within high-dimensional and redundant network traffic data, necessitates robust and intelligent feature selection techniques for effective detection. Traditional methods, such as filter-based, wrapper-based, and embedded approaches, each offer strengths but struggle with scalability or adaptability in complex attack environments. In this study, we explore these existing techniques through a detailed comparative analysis and highlight their limitations when applied to large-scale DDoS detection tasks. Building upon these insights, we introduce a novel Generative Adversarial Network-based Feature Selection (GANFS) method that leverages adversarial learning dynamics to identify the most informative features. By training a GAN exclusively on attack traffic and employing a perturbation-based sensitivity analysis on the Discriminator, GANFS effectively ranks feature importance without relying on full supervision. Experimental evaluations using the CIC-DDoS2019 dataset demonstrate that GANFS not only improves the accuracy of downstream classifiers but also enhances computational efficiency by significantly reducing feature dimensionality. These results point to the potential of integrating generative learning models into cybersecurity pipelines to build more adaptive and scalable detection systems.
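A minimal sketch of the perturbation-based sensitivity ranking the abstract describes, assuming a trained Discriminator exposed as a callable `discriminator(X)` that returns realness scores for attack-traffic rows; the function name, noise scale, and trial count are illustrative, not the authors' exact procedure:

```python
import numpy as np

def rank_features_by_sensitivity(discriminator, X_attack, sigma=0.1, n_trials=10, seed=0):
    """Rank features by how strongly Gaussian perturbation of each one
    shifts the Discriminator's output on attack traffic."""
    rng = np.random.default_rng(seed)
    base = discriminator(X_attack)                        # baseline realness scores
    sensitivity = np.zeros(X_attack.shape[1])
    for j in range(X_attack.shape[1]):
        for _ in range(n_trials):
            X_pert = X_attack.copy()
            X_pert[:, j] += rng.normal(0.0, sigma, size=len(X_pert))
            sensitivity[j] += np.mean(np.abs(discriminator(X_pert) - base))
    return np.argsort(sensitivity / n_trials)[::-1]       # most informative first
```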
- North America > Canada > Ontario > National Capital Region > Ottawa (0.04)
- North America > Canada > New Brunswick > Fredericton (0.04)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
Backdoor attacks on DNN and GBDT -- A Case Study from the insurance domain
Kühlem, Robin, Otten, Daniel, Ludwig, Daniel, Hudde, Anselm, Rosenbaum, Alexander, Mauthe, Andreas
Machine learning (ML) will likely play a large role in many future processes, including those of insurance companies. However, ML models are at risk of being attacked and manipulated. In this work, the robustness of Gradient Boosted Decision Tree (GBDT) models and Deep Neural Networks (DNN) within an insurance context is evaluated. To this end, two GBDT models and two DNNs are trained on two different tabular datasets from an insurance context. Past research in this domain has mainly used homogeneous data, and there are comparatively few insights regarding heterogeneous tabular data. The ML tasks performed on the datasets are claim prediction (regression) and fraud detection (binary classification). For the backdoor attacks, different samples containing a specific pattern were crafted and added to the training data. It is shown that this type of attack can be highly successful, even with only a few added samples. The backdoor attacks worked well on the models trained on one dataset but poorly on the models trained on the other. In real-world scenarios the attacker will have to overcome several obstacles, but since attacks can work with very few added samples, this risk should be evaluated.
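The attack the abstract describes amounts to data poisoning with a fixed trigger pattern. As an illustration only (column names, trigger values, and sample counts are hypothetical, not taken from the paper), a tabular backdoor could be injected like this:

```python
import pandas as pd

def inject_backdoor(train_df, label_col, trigger, target_label, n_poison=50, seed=0):
    """Stamp a fixed trigger pattern into copies of real rows, flip their
    label to the attacker's target, and append them to the training set."""
    poison = train_df.sample(n=n_poison, random_state=seed).copy()
    for col, val in trigger.items():      # e.g. {"num_claims": 9, "vehicle_age": 99}
        poison[col] = val
    poison[label_col] = target_label      # attacker-chosen outcome
    return pd.concat([train_df, poison], ignore_index=True)
```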
- Europe > Germany (0.04)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.04)
- Law Enforcement & Public Safety (1.00)
- Information Technology > Security & Privacy (1.00)
- Banking & Finance > Insurance (1.00)
- Health & Medicine > Therapeutic Area > Cardiology/Vascular Diseases (0.67)
TF-Attack: Transferable and Fast Adversarial Attacks on Large Language Models
Li, Zelin, Chen, Kehai, Liu, Lemao, Bai, Xuefeng, Yang, Mingming, Xiang, Yang, Zhang, Min
With the great advancements in large language models (LLMs), adversarial attacks against LLMs have recently attracted increasing attention. We found that pre-existing adversarial attack methodologies exhibit limited transferability and are notably inefficient, particularly when applied to LLMs. In this paper, we analyze the core mechanisms of previous predominant adversarial attack methods, revealing that 1) the distributions of importance scores differ markedly among victim models, restricting transferability; and 2) the sequential attack process induces substantial time overhead. Based on these two insights, we introduce a new scheme, named TF-Attack, for Transferable and Fast adversarial attacks on LLMs. TF-Attack employs an external LLM as a third-party overseer rather than the victim model to identify critical units within sentences. Moreover, TF-Attack introduces the concept of Importance Level, which allows substitutions to be made in parallel during the attack. We conduct extensive experiments on 6 widely adopted benchmarks, evaluating the proposed method with both automatic and human metrics. Results show that our method consistently surpasses previous methods in transferability and delivers significant speed improvements, up to 20 times faster than earlier attack strategies.
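A rough sketch of the level-wise parallel substitution idea, assuming the overseer LLM has already assigned each token an Importance Level; `candidates` and `adv_score` are placeholder callables, not the paper's actual components:

```python
def parallel_substitute(words, levels, candidates, adv_score):
    """Tokens sharing an Importance Level are replaced in one pass instead
    of re-querying the victim model after every single edit."""
    tokens = list(words)
    for level in sorted(set(levels), reverse=True):       # most important first
        positions = [i for i, l in enumerate(levels) if l == level]
        chosen = {}
        for i in positions:  # each choice is independent, hence parallelizable
            scored = [(adv_score(tokens[:i] + [c] + tokens[i + 1:]), c)
                      for c in candidates(tokens[i])]
            if scored:
                chosen[i] = max(scored, key=lambda t: t[0])[1]
        for i, c in chosen.items():                       # apply the whole level at once
            tokens[i] = c
    return tokens
```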
- Asia > China > Guangdong Province > Shenzhen (0.04)
- Asia > China > Heilongjiang Province > Harbin (0.04)
- Information Technology > Security & Privacy (1.00)
- Government > Military (1.00)
LEMDA: A Novel Feature Engineering Method for Intrusion Detection in IoT Systems
Ghubaish, Ali, Yang, Zebo, Erbad, Aiman, Jain, Raj
Intrusion detection systems (IDS) for the Internet of Things (IoT) systems can use AI-based models to ensure secure communications. IoT systems tend to have many connected devices producing massive amounts of data with high dimensionality, which requires complex models. Complex models have notorious problems such as overfitting, low interpretability, and high computational complexity. Adding model complexity penalty (i.e., regularization) can ease overfitting, but it barely helps interpretability and computational efficiency. Feature engineering can solve these issues; hence, it has become critical for IDS in large-scale IoT systems to reduce the size and dimensionality of data, resulting in less complex models with excellent performance, smaller data storage, and fast detection. This paper proposes a new feature engineering method called LEMDA (Light feature Engineering based on the Mean Decrease in Accuracy). LEMDA applies exponential decay and an optional sensitivity factor to select and create the most informative features. The proposed method has been evaluated and compared to other feature engineering methods using three IoT datasets and four AI/ML models. The results show that LEMDA improves the F1 score performance of all the IDS models by an average of 34% and reduces the average training and detection times in most cases.
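The exact LEMDA formulation is not given in the abstract; the sketch below only illustrates the ingredients it names (mean decrease in accuracy, exponential decay, an optional sensitivity factor) using scikit-learn's permutation importance, with the decay-and-threshold rule being our assumption:

```python
import numpy as np
from sklearn.inspection import permutation_importance

def lemda_style_select(model, X_val, y_val, decay=0.5, sensitivity=1.0, seed=0):
    # Rank features by mean decrease in accuracy (permutation importance)
    mda = permutation_importance(model, X_val, y_val, n_repeats=5,
                                 random_state=seed).importances_mean
    order = np.argsort(mda)[::-1]                     # best feature first
    weights = np.exp(-decay * np.arange(len(order)))  # exponential decay by rank
    # Keep features whose decayed weight clears a sensitivity-scaled cutoff
    return order[weights * sensitivity >= weights.mean()]
```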
- North America > United States > Missouri > St. Louis County > St. Louis (0.04)
- Asia > Singapore (0.04)
- Asia > Middle East > Qatar (0.04)
- (3 more...)
A Dual-Tier Adaptive One-Class Classification IDS for Emerging Cyberthreats
Uddin, Md. Ashraf, Aryal, Sunil, Bouadjenek, Mohamed Reda, Al-Hawawreh, Muna, Talukder, Md. Alamin
In today's digital age, our dependence on IoT (Internet of Things) and IIoT (Industrial IoT) systems has grown immensely; these systems facilitate sensitive activities such as banking transactions and the exchange of personal data, enterprise data, and legal documents. Cyberattackers consistently exploit weak security measures and tools. The Network Intrusion Detection System (IDS) acts as a primary tool against such cyber threats. However, machine learning-based IDSs, when trained on specific attack patterns, often misclassify new emerging cyberattacks. Further, the limited availability of attack instances for training a supervised learner and the ever-evolving nature of cyber threats complicate the matter. This emphasizes the need for an adaptable IDS framework capable of recognizing and learning from unfamiliar/unseen attacks over time. In this research, we propose a one-class classification-driven IDS system structured in two tiers. The first tier distinguishes between normal activities and attacks/threats, while the second tier determines whether a detected attack is known or unknown. Within this second tier, we also embed a multi-classification mechanism coupled with a clustering algorithm. The model not only identifies unseen attacks but also clusters them and uses the clusters for retraining. This makes our model future-proof, capable of evolving with emerging threat patterns. Leveraging one-class classifiers (OCC) at the first level, our approach bypasses the need for attack samples, addressing data imbalance and zero-day attack concerns, while OCC at the second level effectively separates unknown attacks from known ones. Our methodology and evaluations indicate that the presented framework exhibits promising potential for real-world deployments.
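A compact sketch of the two-tier flow, using IsolationForest as a stand-in one-class classifier and DBSCAN for the clustering stage; `X_normal`, `X_known_attacks`, and `X_test` are assumed preprocessed feature matrices, and the paper's actual OCC and clustering choices may differ:

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.cluster import DBSCAN

# Tier 1: one-class profile of normal traffic (no attack samples needed)
tier1 = IsolationForest(random_state=0).fit(X_normal)
# Tier 2: one-class profile of the known attack classes
tier2 = IsolationForest(random_state=0).fit(X_known_attacks)

def classify(x):
    x = np.asarray(x).reshape(1, -1)
    if tier1.predict(x)[0] == 1:       # inlier w.r.t. the normal profile
        return "normal"
    if tier2.predict(x)[0] == 1:       # inlier w.r.t. known attacks
        return "known attack"
    return "unknown attack"

# Cluster flagged unknown attacks so each cluster can seed retraining labels
unknown = np.array([x for x in X_test if classify(x) == "unknown attack"])
labels = DBSCAN(eps=0.5).fit_predict(unknown) if len(unknown) else []
```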
- Oceania > Australia > New South Wales (0.04)
- North America > Canada > New Brunswick > Fredericton (0.04)
- Asia > Taiwan > Taiwan Province > Taipei (0.04)
- (2 more...)
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.48)
- Information Technology > Data Science > Data Mining (1.00)
- Information Technology > Communications > Networks (1.00)
- (4 more...)
Collaborative Learning for Cyberattack Detection in Blockchain Networks
Khoa, Tran Viet, Son, Do Hai, Hoang, Dinh Thai, Trung, Nguyen Linh, Quynh, Tran Thi Thuy, Nguyen, Diep N., Ha, Nguyen Viet, Dutkiewicz, Eryk
This article aims to study intrusion attacks and then develop a novel cyberattack detection framework for blockchain networks. Specifically, we first design and implement a blockchain network in our laboratory. This blockchain network serves two purposes: to generate real traffic data (including both normal data and attack data) for our learning models, and to run real-time experiments evaluating the performance of our proposed intrusion detection framework. To the best of our knowledge, this is the first dataset synthesized in a laboratory for cyberattacks in a blockchain network. We then propose a novel collaborative learning model that allows efficient deployment in the blockchain network to detect attacks. The main idea of the proposed learning model is to enable blockchain nodes to actively collect data, learn from it, and then exchange the learned knowledge with other blockchain nodes in the network. In this way, we not only leverage the knowledge of all the nodes in the network but also avoid gathering all raw data for training at a centralized node, as conventional centralized learning solutions do. Such a framework also avoids the privacy risk of exposing local data as well as excessive network overhead/congestion. Both intensive simulations and real-time experiments clearly show that our proposed collaborative learning-based intrusion detection framework can achieve an accuracy of up to 97.7% in detecting attacks.
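The knowledge-exchange idea resembles federated averaging: each node trains locally and only model parameters cross the network. A minimal logistic-regression sketch, assuming `node_data` is a list of per-node `(X, y)` traffic batches (the paper's actual model and exchange protocol may differ):

```python
import numpy as np

def local_update(weights, X, y, lr=0.01, epochs=5):
    # One node's local training (logistic-regression stand-in)
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted attack probability
        w -= lr * X.T @ (p - y) / len(y)       # gradient of the log-loss
    return w

def collaborative_round(weights, node_data):
    # Each node trains on its own traffic; only the learned weights are
    # exchanged and averaged -- raw data never leaves a node
    updates = [local_update(weights, X, y) for X, y in node_data]
    return np.mean(updates, axis=0)
```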
- Oceania > Australia > New South Wales > Sydney (0.14)
- Asia > North Korea (0.04)
- Asia > Vietnam > Hanoi > Hanoi (0.04)
- (4 more...)
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (1.00)
BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense
Cheng, Siyuan, Tao, Guanhong, Liu, Yingqi, An, Shengwei, Xu, Xiangzhe, Feng, Shiwei, Shen, Guangyu, Zhang, Kaiyuan, Xu, Qiuling, Ma, Shiqing, Zhang, Xiangyu
Deep Learning backdoor attacks have a threat model similar to traditional cyber attacks. Attack forensics, a critical countermeasure for traditional cyber attacks, is hence important for defending against model backdoor attacks. In this paper, we propose a novel model backdoor forensics technique. Given a few attack samples, such as inputs with backdoor triggers, which may represent different types of backdoors, our technique automatically decomposes them into clean inputs and the corresponding triggers. It then clusters the triggers based on their properties to allow automatic attack categorization and summarization. Backdoor scanners can then be automatically synthesized to find other instances of the same type of backdoor in other models. Our evaluation on 2,532 pre-trained models and 10 popular attacks, together with a comparison against 9 baselines, shows that our technique is highly effective. The decomposed clean inputs and triggers closely resemble the ground truth. The synthesized scanners substantially outperform the vanilla versions of existing scanners, which can hardly generalize to different kinds of attacks.
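BEAGLE recovers the clean-input/trigger decomposition via optimization; the simplified sketch below only illustrates the downstream step of clustering trigger residues into attack types, with a naive residual standing in for the real decomposition:

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_triggers(attack_inputs, clean_estimates):
    # Treat the residual between a backdoored input and an estimated clean
    # version as the trigger (a crude proxy for the paper's decomposition)
    return attack_inputs - clean_estimates

def summarize_attacks(triggers, n_types=3, seed=0):
    # Cluster trigger residues by their properties -> one cluster per
    # backdoor type, which can then drive scanner synthesis
    flat = triggers.reshape(len(triggers), -1)
    return KMeans(n_clusters=n_types, random_state=seed, n_init=10).fit_predict(flat)
```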
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > California > San Diego County > San Diego (0.04)
- Asia > Nepal (0.04)
Determining Sequence of Image Processing Technique (IPT) to Detect Adversarial Attacks
Gupta, Kishor Datta, Akhtar, Zahid, Dasgupta, Dipankar
Developing machine learning models that are secure against adversarial examples is challenging, as new methods for generating adversarial attacks are continually being developed. In this work, we propose an evolutionary approach to automatically determine an Image Processing Technique Sequence (IPTS) for detecting malicious inputs. Accordingly, we first used a diverse set of attack methods, including adaptive attack methods (on our defense), to generate adversarial samples from a clean dataset. A detection framework based on a genetic algorithm (GA) is developed to find the optimal IPTS, where optimality is estimated by different fitness measures such as Euclidean distance, entropy loss, average histogram, local binary pattern, and loss functions. The "image difference" between the original and processed images is used to extract features, which are then fed to a classification scheme to determine whether the input sample is adversarial or clean. This paper describes our methodology and reports experiments on multiple datasets tested with several adversarial attacks. For each attack type and dataset, the approach generates a unique IPTS. A set of IPTS is selected dynamically at test time and works as a filter against adversarial attacks. Our empirical experiments show promising results, indicating that the approach can efficiently be used as a pre-processing step for any AI model.
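A generic GA sketch of the IPTS search, with a hypothetical pool of processing operations and a caller-supplied fitness function (in the paper, fitness combines measures such as Euclidean distance, entropy loss, and local binary patterns over the "image difference"):

```python
import random

IPTS_POOL = ["gaussian_blur", "median_filter", "jpeg_compress", "bit_depth_reduce"]

def evolve_ipts(fitness, pop_size=20, seq_len=3, generations=30, seed=0):
    """Evolve sequences of image-processing techniques; `fitness` scores
    how well a sequence separates adversarial from clean samples."""
    rng = random.Random(seed)
    pop = [[rng.choice(IPTS_POOL) for _ in range(seq_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, seq_len)
            child = a[:cut] + b[cut:]             # one-point crossover
            if rng.random() < 0.2:                # occasional mutation
                child[rng.randrange(seq_len)] = rng.choice(IPTS_POOL)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```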
- North America > United States > Tennessee > Shelby County > Memphis (0.04)
- Asia > Middle East > Jordan (0.04)
- Information Technology > Security & Privacy (1.00)
- Government > Military (1.00)
Expansion of Cyber Attack Data From Unbalanced Datasets Using Generative Techniques
Machine learning techniques help to understand the patterns of a dataset in order to create a defense mechanism against cyber attacks. However, it is difficult to construct a reliable model when attack samples are heavily underrepresented in the dataset. A Multilayer Perceptron (MLP) improves accuracy and the performance of detecting attack and benign data when trained on a balanced dataset. We worked with the publicly available UGR'16 dataset. Data wrangling was performed to prepare a test set from the original set. We fed the neural network classifier increasingly large inputs (i.e., 10,000; 50,000; 1 million samples) to observe how accuracy varies with the amount of data. We implemented a GAN model that can produce samples of different attack labels (e.g., blacklist, anomaly spam, ssh scan), allowing us to generate as many samples as necessary from the data drawn from UGR'16. We tested the accuracy of our model first on the imbalanced dataset and then with the added attack samples, and found improved classification performance in the latter case.
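A sketch of the oversampling step, assuming a conditional generator `generator(label, Z)` already trained per attack label; the balancing rule and latent dimension are illustrative, not the paper's exact setup:

```python
import numpy as np

def balance_with_gan(X, y, generator, latent_dim=32, seed=0):
    """Oversample each minority attack label (e.g. blacklist, anomaly spam,
    ssh scan) with a trained conditional generator until it matches the
    majority class."""
    rng = np.random.default_rng(seed)
    counts = {c: int((y == c).sum()) for c in np.unique(y)}
    target = max(counts.values())
    X_parts, y_parts = [X], [y]
    for label, n in counts.items():
        if n < target:
            Z = rng.normal(size=(target - n, latent_dim))   # latent noise
            X_parts.append(generator(label, Z))             # synthetic rows
            y_parts.append(np.full(target - n, label))
    return np.vstack(X_parts), np.concatenate(y_parts)
```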
- North America > United States > Tennessee > Putnam County > Cookeville (0.04)
- North America > United States > Nevada > Clark County > Las Vegas (0.04)
- Europe > Belgium > Flanders > Antwerp Province > Antwerp (0.04)
- (3 more...)
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.70)
Investigating Resistance of Deep Learning-based IDS against Adversaries using min-max Optimization
Khamis, Rana Abou, Shafiq, Omair, Matrawy, Ashraf
Security applications of deep neural networks (DNNs), such as intrusion detection systems (IDS), malware detection, and spam filtering, have become essential for data protection, classification, and prediction tasks. With the growth of adversarial attacks against machine learning models, several concerns have emerged about potential vulnerabilities in designing deep neural network-based intrusion detection systems. In this paper, we study the resilience of deep learning-based intrusion detection systems against adversarial attacks. We apply the min-max (or saddle-point) approach to train intrusion detection systems against adversarial attack samples in the UNSW-NB15 dataset. The max step generates adversarial samples that achieve maximum loss and attack the deep neural network, while the min step follows the existing approach [2] [9] as a defense strategy, optimizing the intrusion detection system to minimize the loss on the incorporated adversarial samples during adversarial training. We study and measure the effectiveness of the adversarial attack methods as well as the resistance of the adversarially trained models against such attacks. We find that adversarial attack methods designed for binary domains can be used in continuous domains and exhibit different misclassification levels. We finally show that principal component analysis (PCA)-based feature reduction can boost the robustness of an intrusion detection system built on a deep neural network.
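The min-max training loop can be sketched with a one-step (FGSM-style) inner maximization in PyTorch; the paper's exact attack/defense configuration may differ, and `model`, `loss_fn`, and `opt` are assumed to be a DNN-based IDS, its loss, and its optimizer:

```python
import torch

def fgsm(model, loss_fn, x, y, eps=0.1):
    # Inner max: one-step perturbation that approximately maximizes the loss
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def adversarial_train_step(model, loss_fn, opt, x, y, eps=0.1):
    # Outer min: update the IDS on the worst-case (adversarial) samples
    x_adv = fgsm(model, loss_fn, x, y, eps)
    opt.zero_grad()
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    opt.step()
    return loss.item()
```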